The Interaction between Memory Allocation and Adaptive Partitioning in Message-Passing Multicomputers

Author

  • Sanjeev Setia
Abstract

Most studies on adaptive partitioning policies for scheduling parallel jobs on distributed memory parallel computers ignore the constraints imposed by the memory requirements of the jobs. In this paper, we first show that these constraints can have a negative impact on the performance of adaptive partitioning policies. We then evaluate the performance of adaptive partitioning in a system where these minimum processor constraints are eased due to the provision of support for virtual memory. Our primary conclusion is that any performance benefits resulting from the easing of minimum processor constraints imposed by the memory requirements of jobs will be negated by the overhead due to paging.
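To make the minimum processor constraint concrete, the C sketch below computes an equipartition-style allocation that is raised to the smallest partition on which a job's data fits in memory. The node memory size, the allocation rule, and all names are illustrative assumptions, not the policy evaluated in the paper.

```c
/* Hypothetical sketch of an adaptive partitioning step that respects a
 * per-job minimum partition size derived from its memory requirement. */
#include <stdio.h>

#define NODE_MEMORY_MB 64   /* assumed memory per processing node */

/* Smallest partition on which the job fits without paging. */
static int min_partition(int job_memory_mb) {
    return (job_memory_mb + NODE_MEMORY_MB - 1) / NODE_MEMORY_MB;
}

/* Equal share of the machine, raised to the job's memory-imposed minimum. */
static int partition_size(int total_procs, int waiting_jobs, int job_memory_mb) {
    int share = total_procs / waiting_jobs;
    int min_p = min_partition(job_memory_mb);
    return share >= min_p ? share : min_p;   /* the constraint may force a larger partition */
}

int main(void) {
    /* 128-node machine, 8 waiting jobs: the equal share is 16 nodes,
     * but a 2 GB job needs at least 32 nodes to avoid paging. */
    printf("partition = %d nodes\n", partition_size(128, 8, 2048));
    return 0;
}
```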


Similar articles

A Message-Passing Distributed Memory Parallel Algorithm for a Dual-Code Thin Layer, Parabolized Navier-Stokes Solver

In this study, the results of parallelization of a 3-D dual code (Thin Layer, Parabolized Navier-Stokes solver) for solving supersonic turbulent flow around body and wing-body combinations are presented. As a serial code, the TLNS solver is very time-consuming and occupies a large amount of memory due to its iterative and lengthy computations. Also, for complicated geometries, an exceeding number of grid...

Performance and Power Analysis of RCCE Message Passing on the Intel Single-Chip Cloud Computer

The number of cores integrated on a single chip increases with each generation of computers. Traditionally, a single operating system (OS) manages all the cores and resource allocation on a multicore chip. Intel’s Single-chip Cloud Computer (SCC), a manycore processor built for research use with 48 cores, is an implementation of a “cluster-on-chip” architecture. That is, the SCC can be configur...

Reducing Data Communication Overhead for Doacross Loop Nests

If the loop iterations of a loop nest cannot be partitioned into independent sets, data communication for the data dependences is inevitable in order to execute them on parallel machines. Such loop nests are referred to as Doacross loop nests. This paper is concerned with compiler algorithms for parallelizing Doacross loop nests for distributed-memory multicomputers. We present a metho...
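As a concrete illustration (not taken from the paper), the C loop below carries a dependence from iteration i-1 to iteration i, so when rows are distributed across the nodes of a multicomputer, the producer of a[i-1][j] must communicate it to the consumer before iteration i can proceed.

```c
#include <stdio.h>

/* A minimal Doacross loop: iteration i reads a value produced by iteration
 * i-1, so the iterations cannot be split into independent sets. On a
 * distributed-memory machine where rows are spread across nodes, the owner
 * of row i-1 must send a[i-1][j] to the owner of row i before that
 * iteration can run. Illustrative only; the paper's algorithm is not shown. */
static void doacross(int n, int m, double a[n][m]) {
    for (int i = 1; i < n; i++)                   /* loop-carried dependence on i */
        for (int j = 1; j < m; j++)
            a[i][j] = a[i - 1][j] + a[i][j - 1];  /* needs row i-1 (possibly remote) */
}

int main(void) {
    double a[4][4] = { { 1, 1, 1, 1 }, { 1, 0, 0, 0 }, { 1, 0, 0, 0 }, { 1, 0, 0, 0 } };
    doacross(4, 4, a);
    printf("a[3][3] = %g\n", a[3][3]);
    return 0;
}
```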

The performance of fast Givens rotations problem implemented with MPI extensions in multicomputers

In this paper, issues related to implementing an MPI version of the fast Givens rotations problem are investigated. We have chosen this algorithm because it has no predictable communication pattern. Message Passing Interface (MPI) is an attempt to standardise the communication library for distributed memory computing systems. The message-passing paradigm is attractive beca...
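For readers unfamiliar with MPI, the minimal C program below shows the point-to-point primitives (MPI_Send/MPI_Recv) that such an implementation builds on; it merely hands a single matrix row from one rank to another and is not the fast Givens rotation code itself.

```c
/* Minimal MPI point-to-point exchange. Compile with mpicc, run with mpirun -np 2. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double row[4] = { 1.0, 2.0, 3.0, 4.0 };       /* e.g. a matrix row to be rotated */
    if (rank == 0 && size > 1) {
        MPI_Send(row, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);   /* hand the row to rank 1 */
    } else if (rank == 1) {
        MPI_Recv(row, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received row starting with %.1f\n", row[0]);
    }
    MPI_Finalize();
    return 0;
}
```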

Virtual-Memory-Mapped Network Interfaces

In today’s multicomputers, software overhead dominates the message-passing latency cost. We designed two multicomputer network interfaces that significantly reduce this overhead. Both support virtual-memory-mapped communication, allowing user processes to communicate without expensive buffer management and without making system calls across the protection boundary separating user processes from t...
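The idea can be sketched as follows, assuming a hypothetical character device /dev/netif that exposes the interface's send queue: the queue is mapped into the user address space once, after which messages are composed with ordinary stores and no per-message system call is needed. The device path and queue layout are assumptions for illustration, not the interfaces described in the paper.

```c
/* Sketch of virtual-memory-mapped communication: map the (hypothetical)
 * network interface send queue once, then send with plain memory writes. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/netif", O_RDWR);          /* hypothetical interface device */
    if (fd < 0) { perror("open"); return 1; }

    /* The one-time mapping crosses the protection boundary; later sends do not. */
    uint8_t *queue = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (queue == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(queue, "hello", 5);                    /* compose the message with ordinary stores */
    queue[4095] = 1;                              /* hypothetical doorbell: notify the interface */

    munmap(queue, 4096);
    close(fd);
    return 0;
}
```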



Publication year: 1995